8373344: Add support for min/max reduction operations for Float16 #28828
This patch adds mid-end support for vectorized min/max reduction operations for half floats. It also includes AArch64 backend support for these operations.

Floating-point min/max reductions do not require strict ordering, because min and max are associative.

The patch generates NEON fminv/fmaxv reduction instructions when the maximum vector length is 8B or 16B. On SVE machines with vector lengths > 16B, it generates the SVE fminv/fmaxv instructions. The patch also adds support for partial min/max reductions on SVE machines using fminv/fmaxv.

A throughput (ops/ms) ratio > 1 indicates that performance with this patch is better than mainline.

Neoverse N1 (UseSVE = 0, max vector length = 16B):

Benchmark         vectorDim  Mode   Cnt    8B     16B
ReductionMaxFP16  256        thrpt  9      3.69   6.44
ReductionMaxFP16  512        thrpt  9      3.71   7.62
ReductionMaxFP16  1024       thrpt  9      4.16   8.64
ReductionMaxFP16  2048       thrpt  9      4.44   9.12
ReductionMinFP16  256        thrpt  9      3.69   6.43
ReductionMinFP16  512        thrpt  9      3.70   7.62
ReductionMinFP16  1024       thrpt  9      4.16   8.64
ReductionMinFP16  2048       thrpt  9      4.44   9.10

Neoverse V1 (UseSVE = 1, max vector length = 32B):

Benchmark         vectorDim  Mode   Cnt    8B     16B    32B
ReductionMaxFP16  256        thrpt  9      3.96   8.62   8.02
ReductionMaxFP16  512        thrpt  9      3.54   9.25   11.71
ReductionMaxFP16  1024       thrpt  9      3.77   8.71   14.07
ReductionMaxFP16  2048       thrpt  9      3.88   8.44   14.69
ReductionMinFP16  256        thrpt  9      3.96   8.61   8.03
ReductionMinFP16  512        thrpt  9      3.54   9.28   11.69
ReductionMinFP16  1024       thrpt  9      3.76   8.70   14.12
ReductionMinFP16  2048       thrpt  9      3.87   8.45   14.70

Neoverse V2 (UseSVE = 2, max vector length = 16B):

Benchmark         vectorDim  Mode   Cnt    8B     16B
ReductionMaxFP16  256        thrpt  9      4.78   10.00
ReductionMaxFP16  512        thrpt  9      3.74   11.33
ReductionMaxFP16  1024       thrpt  9      3.86   9.59
ReductionMaxFP16  2048       thrpt  9      3.94   8.71
ReductionMinFP16  256        thrpt  9      4.78   10.00
ReductionMinFP16  512        thrpt  9      3.74   11.29
ReductionMinFP16  1024       thrpt  9      3.86   9.58
ReductionMinFP16  2048       thrpt  9      3.94   8.71

Testing: hotspot_all, jdk (tier1-3) and langtools (tier1) all pass on Neoverse N1/V1/V2.
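The associativity argument above is what allows the compiler to reorder the reduction freely. A minimal plain-Java sketch (not part of the patch; class and method names are illustrative) showing that a strictly ordered max reduction and a pairwise tree-shaped reduction, which is the order a SIMD lane-wise reduction followed by a horizontal fmaxv effectively uses, produce the same result:

```java
// Illustration only: max is associative, so the reduction order does not
// change the result, unlike a floating-point add reduction.
public class MinReductionOrder {
    // Sequential (strictly ordered) max reduction.
    static float maxSequential(float[] a) {
        float m = Float.NEGATIVE_INFINITY;
        for (float v : a) {
            m = Math.max(m, v);
        }
        return m;
    }

    // Pairwise (tree-shaped) max reduction over a[lo, hi), mimicking the
    // order of a vectorized lane-wise max followed by a horizontal max.
    static float maxPairwise(float[] a, int lo, int hi) {
        if (hi - lo == 1) {
            return a[lo];
        }
        int mid = (lo + hi) / 2;
        return Math.max(maxPairwise(a, lo, mid), maxPairwise(a, mid, hi));
    }

    public static void main(String[] args) {
        float[] a = {3.5f, -1.25f, 7.0f, 0.5f, -8.0f, 2.0f, 6.5f, 4.0f};
        // Both orders agree exactly because max is associative.
        System.out.println(maxSequential(a) == maxPairwise(a, 0, a.length));
    }
}
```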
👋 Welcome back yiwu0b11! A progress list of the required criteria for merging this PR into master will be added to the pull request body.
❗ This change is not yet ready to be integrated.
@yiwu0b11 The following labels will be automatically applied to this pull request:
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing lists. If you would like to change these labels, use the /label pull request command.
Webrevs
galderz left a comment:
Thanks @yiwu0b11, some superficial comments
Resolved (outdated) review comments on:
test/micro/org/openjdk/bench/jdk/incubator/vector/Float16OperationsBenchmark.java
test/hotspot/jtreg/compiler/vectorization/TestFloat16VectorOperations.java
case Op_MinReductionVHF:
case Op_MaxReductionVHF:
We can use the NEON instructions when the vector size is <= 16B for the partial cases as well. Did you test the performance with NEON instead of using predicated SVE instructions?
You mean move it down, like Op_AddReductionVI and Op_AddReductionVL, to use return !VM_Version::use_neon_for_vector(length_in_bytes);?
It doesn't seem to make much of a difference.
Neoverse V1 (UseSVE = 1, max vector length = 32B)
Benchmark vectorDim Mode Cnt 8B(old) 8B(new) chg2/chg1 16B(old) 16B(new) chg2/chg1 32B(old) 32B(new) chg2/chg1
ReductionMaxFP16 256 thrpt 9 3.96 3.96 1.00 8.63 8.62 1.00 8.02 8.02 1.00
ReductionMaxFP16 512 thrpt 9 3.54 3.54 1.00 9.25 9.25 1.00 11.71 11.71 1.00
ReductionMaxFP16 1024 thrpt 9 3.77 3.77 1.00 8.70 8.71 1.00 14.12 14.07 1.00
ReductionMaxFP16 2048 thrpt 9 3.88 3.88 1.00 8.45 8.44 1.00 14.69 14.69 1.00
ReductionMinFP16 256 thrpt 9 3.96 3.96 1.00 8.62 8.61 1.00 8.02 8.03 1.00
ReductionMinFP16 512 thrpt 9 3.55 3.54 1.00 9.26 9.28 1.00 11.72 11.69 1.00
ReductionMinFP16 1024 thrpt 9 3.76 3.76 1.00 8.69 8.70 1.00 14.10 14.12 1.00
ReductionMinFP16 2048 thrpt 9 3.87 3.87 1.00 8.44 8.45 1.00 14.76 14.70 1.00
Neoverse V2 (UseSVE = 2, max vector length = 16B)
Benchmark vectorDim Mode Cnt 8B(old) 8B(new) chg2/chg1 16B(old) 16B(new) chg2/chg1
ReductionMaxFP16 256 thrpt 9 4.77 4.78 1.00 10.00 10.00 1.00
ReductionMaxFP16 512 thrpt 9 3.75 3.74 1.00 11.32 11.33 1.00
ReductionMaxFP16 1024 thrpt 9 3.87 3.86 1.00 9.59 9.59 1.00
ReductionMaxFP16 2048 thrpt 9 3.94 3.94 1.00 8.72 8.71 1.00
ReductionMinFP16 256 thrpt 9 4.77 4.78 1.00 9.97 10.00 1.00
ReductionMinFP16 512 thrpt 9 3.77 3.74 0.99 11.35 11.29 0.99
ReductionMinFP16 1024 thrpt 9 3.86 3.86 1.00 9.56 9.58 1.00
ReductionMinFP16 2048 thrpt 9 3.94 3.94 1.00 8.71 8.71 1.00
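The choice being discussed, which reduction form to emit based on the vector length in bytes, can be modeled with a small sketch. This is a hypothetical Java model, not HotSpot's C++ source; the class, method, and constant names are illustrative, and it only mirrors the intent of a use_neon_for_vector-style predicate (NEON for vectors up to 16 bytes, SVE above that):

```java
// Hypothetical model of the NEON-vs-SVE dispatch decision for reductions.
public class ReductionDispatch {
    static final int NEON_MAX_BYTES = 16; // NEON vectors are at most 128 bits

    // Returns true when the SVE form should be used: only for vector
    // lengths beyond NEON's 16B limit, up to the machine's SVE length.
    static boolean useSveForReduction(int lengthInBytes, int maxSveBytes) {
        return lengthInBytes > NEON_MAX_BYTES && lengthInBytes <= maxSveBytes;
    }

    public static void main(String[] args) {
        // On a 32B SVE machine: 8B and 16B reductions stay on NEON,
        // 32B reductions go to SVE.
        System.out.println(useSveForReduction(8, 32));   // false -> NEON
        System.out.println(useSveForReduction(16, 32));  // false -> NEON
        System.out.println(useSveForReduction(32, 32));  // true  -> SVE
    }
}
```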
> You mean move it down, like Op_AddReductionVI and Op_AddReductionVL, to use return !VM_Version::use_neon_for_vector(length_in_bytes);?
Yes, that is what I meant.
> It doesn't seem to make much of a difference.
So what does 8B/16B/32B mean? I guess it means the real vector size of the reduction operation? But how did you test these cases? I noticed the benchmark code does not have any parallelization differences. Is the vectorization factor decided by using different MaxVectorSize VM options? If so, then I think the partial cases are not exercised. Could you please check whether the instruction for VectorMaskGenNode appears in the generated code? I assume there should be a difference, because for partial cases (vector_size < MaxVectorSize) it used the SVE predicated instructions before, while it uses NEON instructions after. And the instruction latency/throughput of the SVE reductions are much worse than the NEON ones.
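The reduction shape under discussion, lane-wise accumulation over vector-width strides followed by one horizontal reduction, with the remainder either handled by a scalar tail or, as above, by a predicated SVE step, can be sketched in plain Java. This is a hypothetical model, not generated code; LANES = 8 is an assumption corresponding to a 16B vector of 8 half-float lanes:

```java
// Hypothetical model of a vectorized max reduction: per-lane max over
// full vector-width strides, then a horizontal max (the role of NEON/SVE
// fmaxv), then a scalar tail for the partial remainder.
public class LaneWiseMax {
    static final int LANES = 8; // 16B vector / 2B half-float lanes

    static float maxReduce(float[] a) {
        float[] acc = new float[LANES];
        java.util.Arrays.fill(acc, Float.NEGATIVE_INFINITY);
        int i = 0;
        // Main loop: one simulated vector max per iteration, lane by lane.
        for (; i + LANES <= a.length; i += LANES) {
            for (int l = 0; l < LANES; l++) {
                acc[l] = Math.max(acc[l], a[i + l]);
            }
        }
        // Horizontal reduction across lanes (one fmaxv instruction on hardware).
        float m = acc[0];
        for (int l = 1; l < LANES; l++) {
            m = Math.max(m, acc[l]);
        }
        // Scalar tail; a predicated SVE implementation would instead handle
        // this remainder with a masked vector step.
        for (; i < a.length; i++) {
            m = Math.max(m, a[i]);
        }
        return m;
    }

    public static void main(String[] args) {
        float[] a = new float[19]; // deliberately not a multiple of LANES
        for (int k = 0; k < a.length; k++) a[k] = k * 0.5f - 3.0f;
        System.out.println(maxReduce(a)); // max element lands in the tail
    }
}
```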
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/28828/head:pull/28828
$ git checkout pull/28828
Update a local copy of the PR:
$ git checkout pull/28828
$ git pull https://git.openjdk.org/jdk.git pull/28828/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 28828
View PR using the GUI difftool:
$ git pr show -t 28828
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/28828.diff